176 research outputs found

    Metamagnetic transitions and anomalous magnetoresistance in EuAg$_4$As$_2$ single crystals

    In this paper, the magnetic and transport properties of EuAg$_4$As$_2$ single crystals, which crystallize in a centrosymmetric trigonal CaCu$_4$P$_2$-type structure, were systematically studied. Two magnetic transitions were confirmed at $T_{N1}$ = 10 K and $T_{N2}$ = 15 K, and both are driven noticeably to lower temperatures with increasing field. At low temperatures, applying a magnetic field in the $ab$ plane induces two successive metamagnetic transitions. For both $H \parallel ab$ and $H \parallel c$, EuAg$_4$As$_2$ shows an unexpectedly large positive magnetoresistance (up to 202%) at low fields below 10 K and a large negative magnetoresistance (down to -78%) at high fields and intermediate temperatures. Such an anomalous field dependence of the magnetoresistance may find application in future magnetic sensors. Finally, the magnetic phase diagrams of EuAg$_4$As$_2$ were constructed for both $H \parallel ab$ and $H \parallel c$.
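    The abstract quotes magnetoresistance values as percentages without stating the convention; a reasonable reading, and the usual one for such reports, is the relative change of resistivity in field (an assumption, not something the abstract spells out):

        \[
          \mathrm{MR}(\%) \;=\; \frac{\rho(H) - \rho(0)}{\rho(0)} \times 100\%
        \]

    On this reading, +202% means the in-field resistivity roughly triples relative to its zero-field value, while -78% means it drops to about a fifth of it.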

    Reference Matters: Benchmarking Factual Error Correction for Dialogue Summarization with Fine-grained Evaluation Framework

    Factuality is important to dialogue summarization, and factual error correction (FEC) of model-generated summaries is one way to improve it. Current FEC evaluation, which relies on factuality metrics, is neither reliable nor detailed enough. To address this problem, we are the first to manually annotate an FEC dataset for dialogue summarization, containing 4000 items, and we propose FERRANTI, a fine-grained evaluation framework based on reference correction that automatically evaluates the performance of FEC models on different error categories. Using this evaluation framework, we conduct extensive experiments with FEC approaches under a variety of settings, identify the best training modes, and find significant differences in the performance of existing approaches across factual error categories. Comment: Accepted to ACL 2023 Main Conference
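    A minimal sketch of what per-error-category evaluation against reference corrections can look like is given below. The category names, data layout, and exact-match scoring are illustrative assumptions, not FERRANTI's actual taxonomy or metrics.

        # Sketch: score FEC outputs per annotated error category against
        # reference corrections. Fields, categories, and exact-match scoring
        # are assumptions for illustration only.
        from collections import defaultdict

        def per_category_accuracy(items):
            hits, totals = defaultdict(int), defaultdict(int)
            for item in items:
                cat = item["category"]          # annotated factual error category
                totals[cat] += 1
                if item["predicted"].strip() == item["reference"].strip():
                    hits[cat] += 1              # model correction matches reference
            return {cat: hits[cat] / totals[cat] for cat in totals}

        print(per_category_accuracy([
            {"category": "wrong_speaker", "reference": "Tom will call Ann.",
             "predicted": "Tom will call Ann."},
            {"category": "wrong_object", "reference": "They booked a cab.",
             "predicted": "They booked a table."},
        ]))  # {'wrong_speaker': 1.0, 'wrong_object': 0.0}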

    How Well Do Large Language Models Understand Syntax? An Evaluation by Asking Natural Language Questions

    While recent advancements in large language models (LLMs) bring us closer to achieving artificial general intelligence, the question persists: Do LLMs truly understand language, or do they merely mimic comprehension through pattern recognition? This study seeks to explore this question through the lens of syntax, a crucial component of sentence comprehension. Adopting a natural language question-answering (Q&A) scheme, we craft questions targeting nine syntactic knowledge points that are most closely related to sentence comprehension. Experiments conducted on 24 LLMs suggest that most have a limited grasp of syntactic knowledge, exhibiting notable discrepancies across different syntactic knowledge points. In particular, questions involving prepositional phrase attachment pose the greatest challenge, whereas those concerning adjectival modifiers and indirect objects are relatively easier for LLMs to handle. Furthermore, a case study on the training dynamics of the LLMs reveals that the majority of syntactic knowledge is learned during the initial stages of training, hinting that simply increasing the number of training tokens may not be the 'silver bullet' for improving the comprehension ability of LLMs. Comment: 20 pages, 6 figures
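    As a concrete illustration of the natural-language Q&A scheme, the sketch below poses a single prepositional-phrase-attachment question. The prompt wording and the lenient answer check are assumptions rather than the paper's test items, and ask_model is a placeholder for whichever LLM interface is being evaluated.

        # Illustrative syntax probe phrased as a natural-language question.
        # The wording and the answer check are assumptions; `ask_model` stands
        # in for the LLM under evaluation.
        def pp_attachment_probe(ask_model):
            sentence = "The girl saw the man with the telescope."
            question = (f'In the sentence "{sentence}", which word does the phrase '
                        f'"with the telescope" attach to: "saw" or "man"?')
            answer = ask_model(question).lower()
            # Both attachments are grammatical; a syntax-aware model should at
            # least name one of the two plausible heads.
            return "saw" in answer or "man" in answer

        # Dummy model that always picks the verb attachment:
        print(pp_attachment_probe(lambda q: "It attaches to 'saw'."))  # True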

    Mining Density Contrast Subgraphs

    Dense subgraph discovery is a key primitive in many graph mining applications, such as detecting communities in social networks and mining gene correlations from biological data. Most studies on dense subgraph mining deal with only one graph. However, in many applications, we have more than one graph describing relations among the same group of entities. In this paper, given two graphs sharing the same set of vertices, we investigate the problem of detecting subgraphs that contrast the most with respect to density. We call such subgraphs Density Contrast Subgraphs, or DCS in short. Two widely used graph density measures, average degree and graph affinity, are considered. For both density measures, mining DCS is equivalent to mining the densest subgraph from a "difference" graph, which may have both positive and negative edge weights. Due to the existence of negative edge weights, existing dense subgraph detection algorithms cannot identify the subgraph we need. We prove the computational hardness of mining DCS under the two graph density measures and develop efficient algorithms to find DCS. We also conduct extensive experiments on several real-world datasets to evaluate our algorithms. The experimental results show that our algorithms are both effective and efficient. Comment: Full version of an ICDE'18 paper
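    To make the reduction concrete, the sketch below builds the signed "difference" graph under the average-degree measure and then runs a plain greedy peeling pass over it. The peeling step is an illustration only: as the abstract points out, such standard heuristics are not guaranteed to find the optimum once negative edge weights appear, and this is not the paper's algorithm.

        # Sketch: difference graph + naive greedy peeling (illustration only).
        import networkx as nx

        def edge_weight(G, u, v):
            data = G.get_edge_data(u, v)
            return data.get("weight", 1.0) if data else 0.0

        def difference_graph(G1, G2):
            """Signed graph whose edge weights are w1(u, v) - w2(u, v)."""
            D = nx.Graph()
            D.add_nodes_from(set(G1) | set(G2))
            for u, v in set(G1.edges()) | set(G2.edges()):
                w = edge_weight(G1, u, v) - edge_weight(G2, u, v)
                if w != 0:
                    D.add_edge(u, v, weight=w)
            return D

        def greedy_peel(D):
            """Repeatedly drop the vertex of smallest weighted degree, keeping the
            best average weighted degree (2 * total weight / |V|) seen so far."""
            H, best = D.copy(), (float("-inf"), set())
            while H.number_of_nodes() > 0:
                total = sum(w for _, _, w in H.edges(data="weight"))
                density = 2.0 * total / H.number_of_nodes()
                if density > best[0]:
                    best = (density, set(H.nodes()))
                H.remove_node(min(H, key=lambda n: H.degree(n, weight="weight")))
            return best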

    Distantly-Supervised Named Entity Recognition with Adaptive Teacher Learning and Fine-grained Student Ensemble

    Distantly-Supervised Named Entity Recognition (DS-NER) effectively alleviates the data scarcity problem in NER by automatically generating training samples. Unfortunately, distant supervision may induce noisy labels, undermining the robustness of the learned models and restricting their practical application. To mitigate this problem, recent works adopt self-training teacher-student frameworks that gradually refine the training labels and improve the generalization ability of NER models. However, we argue that the performance of current self-training frameworks for DS-NER is severely limited by their plain designs, including both inadequate student learning and coarse-grained teacher updating. Therefore, in this paper, we make the first attempt to alleviate these issues by proposing: (1) adaptive teacher learning, comprising joint training of two teacher-student networks that considers both consistent and inconsistent predictions between the two teachers, thus promoting comprehensive student learning; and (2) a fine-grained student ensemble that updates each fragment of the teacher model with a temporal moving average of the corresponding fragment of the student, which enhances consistent predictions on each model fragment against noise. To verify the effectiveness of our proposed method, we conduct experiments on four DS-NER datasets. The experimental results demonstrate that our method significantly surpasses previous SOTA methods. Comment: Accepted at AAAI 2023
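    A minimal sketch of the fragment-wise temporal moving average described above, written against PyTorch, is shown below. Treating each top-level submodule as a "fragment" and the 0.99 momentum are illustrative assumptions, not the paper's actual granularity or hyperparameters.

        # Sketch: update each teacher fragment as an exponential moving average
        # of the matching student fragment. "Fragment" = top-level submodule here,
        # which is an assumption made for illustration.
        import torch

        @torch.no_grad()
        def ema_update_by_fragment(teacher, student, momentum=0.99):
            student_fragments = dict(student.named_children())
            for name, t_fragment in teacher.named_children():
                s_fragment = student_fragments[name]
                for t_p, s_p in zip(t_fragment.parameters(), s_fragment.parameters()):
                    t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

        # Usage with two identically structured NER taggers (hypothetical builders):
        #   teacher, student = build_tagger(), build_tagger()
        #   ema_update_by_fragment(teacher, student, momentum=0.99)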